2 research outputs found

    Extracted features based multi-class classification of orthodontic images

    The purpose of this study is to investigate computer vision and machine learning methods for the classification of orthodontic images, in order to provide orthodontists with a solution for multi-class classification of patients' images to evaluate the evolution of their treatment. We proposed three algorithms based on extracted features, such as facial features and skin colour in the YCbCr colour space, assigned to the nodes of a decision tree to classify orthodontic images: an algorithm for intra-oral images, an algorithm for mould images, and an algorithm for extra-oral images. We then compared our method against a baseline that uses the Local Binary Pattern (LBP) algorithm to extract textural features from the images. We applied principal component analysis (PCA) to reduce the redundant parameters and classified the LBP features with six classifiers: Quadratic Support Vector Machine (SVM), Cubic SVM, Radial Basis Function SVM, Cosine K-Nearest Neighbours (KNN), Euclidean KNN, and Linear Discriminant Analysis (LDA). The presented algorithms were evaluated on a dataset of images from 98 different patients, and the experimental results demonstrate the good performance of our proposed method, with higher accuracy than the machine learning algorithms, among which the LDA classifier achieves an accuracy of 84.5%.
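    A minimal sketch of the LBP + PCA + classifier baseline described above, using scikit-image and scikit-learn. The LBP parameters (P = 8, R = 1, uniform method), the PCA variance target, and the synthetic stand-in images are illustrative assumptions, not the authors' settings.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def lbp_histogram(gray_image, P=8, R=1):
        """Uniform LBP codes summarised as a normalised histogram (textural feature)."""
        codes = local_binary_pattern(gray_image, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
        return hist

    # Synthetic stand-in images and labels; a real run would load the intra-oral,
    # mould, and extra-oral photographs and their classes instead.
    rng = np.random.default_rng(0)
    images = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(60)]
    labels = rng.integers(0, 3, size=60)

    X = np.array([lbp_histogram(img) for img in images])
    y = np.array(labels)

    classifiers = {
        "Quadratic SVM": SVC(kernel="poly", degree=2),
        "Cubic SVM": SVC(kernel="poly", degree=3),
        "RBF SVM": SVC(kernel="rbf"),
        "Cosine KNN": KNeighborsClassifier(metric="cosine"),
        "Euclidean KNN": KNeighborsClassifier(metric="euclidean"),
        "LDA": LinearDiscriminantAnalysis(),
    }

    for name, clf in classifiers.items():
        # PCA removes redundant dimensions of the LBP features before classification.
        pipe = make_pipeline(PCA(n_components=0.95), clf)
        scores = cross_val_score(pipe, X, y, cv=5)
        print(f"{name}: mean cross-validated accuracy {scores.mean():.3f}")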

    Threshold-Based Segmentation for Landmark Detection Using CBCT Images

    The aim of this study is to examine the influence of threshold-based segmentation on the mean error of automatic landmark detection in 3D CBCT images. A GUI was developed for radiologists, allowing manual landmark identification and visualization of CBCT images. After threshold-based segmentation, a semi-automatic landmark detection algorithm was designed using the anatomic definition of each landmark. The threshold was varied in steps of 50 Hounsfield units to assess the detection error. Five CBCT images were used to validate the proposed approach. The detection error was influenced by the threshold variation: for one patient, the error changed from 1.49 mm to 10.32 mm at a low threshold value, while for another patient the error changed from 1.96 mm to 12.28 mm at a high threshold value. In a CBCT scanner, the choice of the threshold value used for segmentation can therefore be an important source of measurement error.
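    A minimal sketch of this threshold sweep, assuming a synthetic stand-in volume and a simplified landmark detector (the most superior above-threshold voxel); the study's actual detector follows the anatomic definition of each landmark and compares against the radiologist's manual annotation.

    import numpy as np

    def segment(volume_hu, threshold_hu):
        """Binary threshold-based segmentation of a CBCT volume given in Hounsfield units."""
        return volume_hu >= threshold_hu

    def detect_landmark(mask, spacing_mm):
        """Stand-in detector: position (in mm) of the most superior segmented voxel."""
        coords = np.argwhere(mask)
        if coords.size == 0:
            return np.full(3, np.nan)
        top = coords[np.argmin(coords[:, 0])]  # smallest slice index, assumed superior-first orientation
        return top * np.asarray(spacing_mm)

    def detection_error(auto_pt, manual_pt):
        """Euclidean distance in mm between automatic and manual landmark positions."""
        return float(np.linalg.norm(np.asarray(auto_pt) - np.asarray(manual_pt)))

    # Synthetic stand-in volume and manual reference; a real run would load the CBCT
    # series and the landmark identified by the radiologist through the GUI.
    rng = np.random.default_rng(0)
    volume_hu = rng.normal(0.0, 200.0, size=(80, 80, 80))
    volume_hu[20:60, 30:50, 30:50] += 1200.0      # bone-like high-density block
    spacing_mm = (0.4, 0.4, 0.4)
    manual_landmark = np.array([20, 40, 40]) * np.asarray(spacing_mm)

    # Vary the threshold in 50 HU steps and record the landmark detection error.
    for threshold in range(200, 1401, 50):
        mask = segment(volume_hu, threshold)
        auto_landmark = detect_landmark(mask, spacing_mm)
        err = detection_error(auto_landmark, manual_landmark)
        print(f"threshold {threshold:5d} HU -> error {err:.2f} mm")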